In this paper we focus on the task of rating solutions to a programming exercise. State-of-the-art rating methods generally examine each solution against an exhaustive set of test cases, typically designed manually; hence an issue of completeness arises. We propose applying bounded model checking to the automatic generation of test cases. Our experimental evaluation reveals a substantial increase in rating accuracy at the cost of a moderate increase in required computational resources. Most importantly, applying model checking uncovers errors in solutions that would previously have been classified as correct.
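As a minimal illustrative sketch (not the paper's actual toolchain, and assuming hypothetical `reference` and `candidate` functions), the core idea can be emulated in plain Python: exhaustively explore all inputs up to a small bound, compare a candidate solution against a reference implementation, and turn any disagreement into a new test case — the same role a counterexample from a bounded model checker plays.

```python
from itertools import product

def reference(xs):
    # Reference solution: maximum of a non-empty list.
    return max(xs)

def candidate(xs):
    # Hypothetical student solution with a subtle bug:
    # initializing `best` to 0 fails on all-negative inputs.
    best = 0
    for x in xs:
        if x > best:
            best = x
    return best

def bounded_check(bound=3, values=(-2, -1, 0, 1, 2)):
    """Explore every input list up to length `bound` over the given
    value domain; return the first counterexample found, else None."""
    for n in range(1, bound + 1):
        for xs in product(values, repeat=n):
            if candidate(list(xs)) != reference(list(xs)):
                return list(xs)  # counterexample -> new test case
    return None

print(bounded_check())
```

Here the checker immediately reports `[-2]` as a failing input — a test case a manually written suite of "typical" inputs could easily miss, mirroring the abstract's point that generated tests expose errors in solutions previously classified as correct.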